An Anytime Scheme for Bounding Posterior Beliefs
Authors
Abstract
This paper presents an anytime scheme for computing lower and upper bounds on posterior marginals in Bayesian networks. The scheme draws from two previously proposed methods, bounded conditioning (Horvitz, Suermondt, & Cooper 1989) and bound propagation (Leisink & Kappen 2003). Following the principles of cutset conditioning (Pearl 1988), our method enumerates a subset of cutset tuples and applies exact reasoning in the network instances conditioned on those tuples. The probability mass of the remaining tuples is bounded using a variant of bound propagation. We show that our new scheme improves on the earlier schemes.

Copyright © 2006, American Association for Artificial Intelligence (www.aaai.org). All rights reserved.

Introduction

Computing bounds on posterior marginals is a special case of approximating posterior marginals with a desired degree of precision, which is NP-hard (Dagum & Luby 1993). We address this hard problem by proposing an anytime bounding framework based on two previously proposed bounding schemes, bounded conditioning and bound propagation.

Bounded conditioning (BC) (Horvitz, Suermondt, & Cooper 1989) is founded on the cutset-conditioning method (Pearl 1988). Given a Bayesian network over X, evidence E ⊂ X, E=e, and a subset of variables C ⊂ X\E, we can obtain exact posterior marginals by enumerating all cutset tuples c^i ∈ D(C), M=|D(C)|, using the formula:

P(x|e) = ∑_{i=1}^{M} P(x, c^i, e) / ∑_{i=1}^{M} P(c^i, e)    (1)

For any assignment c^i, the computation of the quantities P(x, c^i, e) and P(c^i, e) is linear in the network size if C is a loop-cutset and exponential in w if C is a w-cutset. The limitation of the cutset-conditioning method is that the number of cutset tuples M grows exponentially with the cutset size. Horvitz, Suermondt, and Cooper (1989) observed that often a small number of tuples h << M contains most of the probability mass of P(e) = ∑_{i=1}^{M} P(c^i, e). Subsequently, they proposed to compute the probabilities P(x, c^i, e) and P(c^i, e) exactly only for the h tuples, 1 ≤ i ≤ h, with the highest prior probabilities P(c^i), while bounding the rest by their priors.

Bounded conditioning was the first method to offer anytime properties and to guarantee convergence to the exact marginals over time as h→M. The scheme was validated on the Alarm network with 37 nodes and M=108 loop-cutset tuples. Without evidence, the algorithm computed small bounds intervals, ≈0.01 or less, after generating 40 cutset instances. However, with 3 and 4 nodes assigned as evidence, the bounds interval length rose to ≈0.15 after processing the same 40 tuples. The latter shows how the BC bounds interval widens as the probability of evidence P(e) decreases. In fact, we can show that the BC upper bound can exceed 1 when P(e) is small (Bidyuk & Dechter 2005).

A related bounding scheme was proposed in (Poole 1996). Instead of enumerating the variables of a cutset, it enumerates all variables and uses conflict counting to update the function bounding the remaining probability mass.

The bound propagation (BdP) scheme (Leisink & Kappen 2003) obtains bounds by iteratively solving a linear optimization problem for each variable such that the minimum and maximum of the objective function correspond to lower and upper bounds on the posterior marginals. Its authors demonstrated the performance of the scheme on the Alarm network, Ising grids, and regular bi-partite graphs.

In this work we propose a framework, which we term Any Time Bounds (ATB), that also builds on the principles of conditioning: it fully explores h cutset tuples and bounds the remaining probability mass spread over the unexplored tuples. The scheme improves over bounded conditioning in several ways. First, it bounds the mass of the unexplored tuples more accurately, in polynomial time. Second, it uses cutset sampling (Bidyuk & Dechter 2003a; 2003b) to find high-probability cutset tuples.
Finally, utilizing an improved variant of bound propagation as a plugin within our anytime framework yields a scheme that achieves greater accuracy than either bounded conditioning or bound propagation. In general, the framework allows plugging in any bounding scheme to bound the probability mass over the unexplored tuples.

Background

Definition 1 (belief networks) Let X={X1, ..., Xn} be a set of random variables over multi-valued domains
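The cutset-conditioning formula (1), and the idea of exploring only h high-prior tuples while bounding the rest by their priors, can be illustrated with a small sketch. The three-node chain A → B → C and its CPTs below are hypothetical (not from the paper), and the simple truncation bounds shown are an illustration of the principle, using only the inequalities 0 ≤ P(x, c^i, e) ≤ P(c^i, e) ≤ P(c^i); they are not the exact BC formulas.

```python
# Hypothetical binary chain A -> B -> C (illustrative CPTs, not from the paper).
pA = {0: 0.6, 1: 0.4}                                       # P(A)
pB = {(0, 0): 0.7, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.8}   # P(B=b | A=a), key (a, b)
pC = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.5, (1, 1): 0.5}   # P(C=c | B=b), key (b, c)

def joint(a, b, c):
    return pA[a] * pB[(a, b)] * pC[(b, c)]

e = 1  # evidence: C = 1; take B as the cutset C of Eq. (1)

def posterior(a):
    # Eq. (1): exact posterior by enumerating all cutset tuples b.
    num = sum(joint(a, b, e) for b in (0, 1))
    den = sum(joint(a2, b, e) for a2 in (0, 1) for b in (0, 1))
    return num / den

def bounds(a, explored):
    # Truncation bounds: compute exact terms for the explored tuples only,
    # bound each unexplored term between 0 and the cutset prior P(b).
    prior_b = lambda b: sum(pA[a2] * pB[(a2, b)] for a2 in (0, 1))
    num = sum(joint(a, b, e) for b in explored)
    den = sum(joint(a2, b, e) for a2 in (0, 1) for b in explored)
    rest = sum(prior_b(b) for b in (0, 1) if b not in explored)
    lb = num / (den + rest)           # unexplored mass counted entirely against x
    ub = (num + rest) / (den + rest)  # unexplored mass counted entirely for x
    return lb, ub

lb, ub = bounds(0, explored={1})
print(lb, posterior(0), ub)  # lb <= P(A=0 | C=1) <= ub
```

With every cutset tuple explored the unexplored prior mass is zero and both bounds collapse to the exact posterior, mirroring BC's convergence as h→M.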
Related Papers
An Anytime Boosting Scheme for Bounding Posterior Beliefs
This paper presents an anytime scheme for computing lower and upper bounds on the posterior marginals in Bayesian networks with discrete variables. Its power lies in its ability to use any available scheme that bounds the probability of evidence, enhance its performance in an anytime manner, and transform it effectively into bounds for posterior marginals. The scheme is novel in that using the cuts...
Active Tuples-based Scheme for Bounding Posterior Beliefs
The paper presents a scheme for computing lower and upper bounds on the posterior marginals in Bayesian networks with discrete variables. Its power lies in its ability to use any available scheme that bounds the probability of evidence or posterior marginals and enhance its performance in an anytime manner. The scheme uses the cutset conditioning principle to tighten existing bounding schemes a...
Partition-based Anytime Approximation for Belief Updating
The paper presents a parameterized approximation scheme for probabilistic inference. The scheme, called Mini-Clustering (MC), extends the partition-based approximation offered by mini-bucket elimination to tree decompositions. The benefit of this extension is that all single-variable beliefs are computed (approximately) at once, using a two-phase message-passing process along the cluster tree. ...
Metacognitive Beliefs and Students' Tendency toward Drug Abuse and Cross-level Effect of School-Bounding
Objective: The present research aimed to examine positive and negative beliefs about worry and tendency of students to drug abuse in terms of cross-level effect of school-bounding. Methods: In this multi-level investigation, 1000 students of high schools were selected by means of multi-stage sampling technique. Then, they completed metacognitive questionnaire (MCQ), school-bound...
Anytime Best+Depth-First Search for Bounding Marginal MAP
We introduce new anytime search algorithms that combine best-first with depth-first search into hybrid schemes for Marginal MAP inference in graphical models. The main goal is to facilitate the generation of upper bounds (via the best-first part) alongside the lower bounds of solutions (via the depth-first part) in an anytime fashion. We compare against two of the best current state-of-the-art s...